
    The Train Benchmark: cross-technology performance evaluation of continuous model queries

    In model-driven development of safety-critical systems (like automotive, avionics or railways), the well-formedness of models is repeatedly validated in order to detect design flaws as early as possible. In many industrial tools, validation rules are still often implemented as large amounts of imperative model traversal code, which makes the rule implementations complicated and hard to maintain. Additionally, as models rapidly increase in size and complexity, efficient execution of validation rules is challenging for the currently available tools. Checking well-formedness constraints can be captured by declarative queries over graph models, while model update operations can be specified as model transformations. This paper presents a benchmark for systematically assessing the scalability of validating and revalidating well-formedness constraints over large graph models. The benchmark defines well-formedness validation scenarios in the railway domain: a metamodel, an instance model generator, a set of well-formedness constraints captured by queries, and fault injection and repair operations (imitating the work of systems engineers by model transformations). The benchmark focuses on the performance of query evaluation, i.e. its execution time and memory consumption, with a particular emphasis on reevaluation. We demonstrate that the benchmark can be adapted to various technologies and query engines, including modeling tools as well as relational, graph and semantic databases. The Train Benchmark is available as an open-source project with continuous builds at https://github.com/FTSRG/trainbenchmark.
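    To make the check, edit and recheck workflow described above concrete, the Python sketch below runs one validation round on a toy graph model: evaluate a well-formedness query, inject a fault via a model edit, reevaluate, repair, and reevaluate again while timing each query run. The Switch and Model classes, the "every switch must be monitored by a sensor" constraint, and the fault-injection and repair operations are simplified illustrative assumptions, not the benchmark's actual railway metamodel, query set or transformations.

# Illustrative sketch of a check -> edit -> recheck validation round.
# The model, the constraint and the edit operations are simplified
# assumptions for illustration, not the Train Benchmark's artifacts.
import time
from dataclasses import dataclass, field


@dataclass
class Switch:
    id: int
    monitored_by: list = field(default_factory=list)  # sensors observing this switch


@dataclass
class Model:
    switches: dict = field(default_factory=dict)

    def add_switch(self, sw):
        self.switches[sw.id] = sw


def unmonitored_switches(model):
    """Declarative-style query: every switch with no attached sensor is a violation."""
    return [sw for sw in model.switches.values() if not sw.monitored_by]


def inject_fault(model, switch_id):
    """Edit operation imitating an engineer's mistake: drop all sensors of a switch."""
    model.switches[switch_id].monitored_by.clear()


def repair(model, switch_id, sensor_id):
    """Repair transformation: reattach a sensor to the faulty switch."""
    model.switches[switch_id].monitored_by.append(sensor_id)


def timed(label, query, model):
    """Run a query and report the number of violations and the execution time."""
    start = time.perf_counter()
    result = query(model)
    print(f"{label}: {len(result)} violations in {time.perf_counter() - start:.6f}s")
    return result


if __name__ == "__main__":
    # Build a small instance model (the real benchmark generates much larger ones).
    model = Model()
    for i in range(1000):
        model.add_switch(Switch(id=i, monitored_by=[f"sensor-{i}"]))

    timed("initial check", unmonitored_switches, model)         # expect 0 violations
    inject_fault(model, switch_id=42)
    timed("recheck after fault", unmonitored_switches, model)   # expect 1 violation
    repair(model, switch_id=42, sensor_id="sensor-42")
    timed("recheck after repair", unmonitored_switches, model)  # expect 0 violations

    The benchmark's emphasis on reevaluation comes from exactly this loop: after each small edit, the query is rerun, so engines with incremental query evaluation can avoid re-traversing the whole model, whereas the naive full scan above repeats all the work on every recheck.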